Role clarity deficiencies can wreck agile teams
Background
One of the twelve agile principles is to build projects around motivated individuals and trust them to get the job done. Such agile teams must self-organize, but self-organization involves conflict and is therefore difficult. One area of difficulty is agreeing on everybody’s role.
Question
What dynamics arise in a self-organizing team from the negotiation of everybody’s role?
Method
We conceptualize observations from five agile teams (work observations, interviews) by Charmazian Grounded Theory Methodology.
Results
We define role as something transient and implicit, not fixed and named. Roles are characterized by the responsibilities and expectations of each team member. Every team member must understand and accept both their own roles (Local role clarity) and everybody else’s roles (Team-wide role clarity). Role clarity allows a team to work smoothly and effectively and to develop its members’ skills quickly. Lack of role clarity creates friction that not only hampers day-to-day work but also appears to lead to high employee turnover. Agile coaches are critical for creating and maintaining role clarity.
Conclusions
Agile teams should pay close attention to the levels of Local role clarity of each member and Team-wide role clarity overall, because role clarity deficits are highly detrimental.
Plat_Forms, a contest: the web-development platform comparison
"Plat_Forms" is a competition in which top-class teams of three programmers
compete to implement the same requirements for a web-based system within 30
hours, each team using a different technology platform (Java EE, .NET, PHP,
Perl, Python, or Ruby on Rails). The results will provide new insights into
the real (rather than purported) pros, cons, and emergent properties of each
platform. The evaluation will analyze many aspects of each solution, both
external (usability, functionality, reliability, performance, etc.) and
internal (structure, understandability, flexibility, etc.).
bflinks: Reliable Bugfix links via bidirectional references and tuned heuristics
Background: Data from software data repositories such as source code version
archives and defect databases contains valuable information that can be used
for insights (leading to subsequent improvements), in particular defect
insertion circumstance analysis and defect prediction. The first step in such
analyses is identifying defect-correcting changes in the version archive
(bugfix commits) and linking them to corresponding entries in the defect
database, thus establishing bugfix links, in order to enrich the content of
the defect-correcting change with additional meta-data. Typically, identifying
the bugfix commits in a version archive is done via heuristic string matching
on the commit message. Research questions: Which filters could be used to
obtain a set of bugfix links? How does one set the cutoff parameters of each?
What effect (results loss and precision) does each filter then have? Which
overall precision, results loss, and recall is achieved? Method: We analyze a
comprehensive modular set of seven independent filters, including new ones
that make use of reverse links. We describe and evaluate visual heuristics
(based on simple diagnostic plots) for setting six filters' cutoff parameter.
We apply these to a commercial repository from the Web CMS domain and validate
the results with unprecedented precision by making use of a product expert to
manually verify over 2500 links. Results: The parameter selection heuristics
pick a very good parameter value in five of the six cases and a reasonably
good one in the sixth. As a result, the combined filtering, called bflinks,
proposes a set of bugfix links that has 93% precision with only 7% results
loss. Conclusion: The modular filtering approach can provide high-quality
results and can be adapted to repositories with different properties.
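The starting point described above, heuristic string matching on commit messages, can be sketched as follows. This is a minimal illustrative filter, not one of the paper's actual seven filters: the keyword list, the `#<number>` defect-ID pattern, and the data layout are assumptions for the example.

```python
import re

# Assumed keyword filter for defect-correcting commits (illustrative only).
BUGFIX_KEYWORDS = re.compile(r"\b(fix(es|ed)?|bug|defect|patch(ed)?)\b", re.IGNORECASE)
# Assumed defect-database ID format, e.g. "#123" (illustrative only).
BUG_ID = re.compile(r"#(\d+)")

def candidate_bugfix_links(commits):
    """Yield (commit_id, bug_id) pairs for commits whose message matches
    the keyword heuristic and mentions a defect-database ID, i.e. the
    raw candidate bugfix links before any further filtering."""
    for commit_id, message in commits:
        if BUGFIX_KEYWORDS.search(message):
            for bug_id in BUG_ID.findall(message):
                yield (commit_id, int(bug_id))

commits = [
    ("a1", "Fixes #123: null pointer in login handler"),
    ("b2", "Refactor session cache"),
    ("c3", "bug #456 patched, see also #789"),
]
print(list(candidate_bugfix_links(commits)))
```

Such a keyword-only heuristic is exactly where the precision problem arises: a message like "see also #789" may reference a ticket without correcting it, which is why the paper layers additional filters (including reverse links) on top of candidates like these.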
The CuPit compiler for the MasPar MP-1 and MP-2: a literate programming document
This document contains the complete source code of the CuPit
compiler for the MasPar MP-1/MP-2 SIMD parallel machines. The
compiler is presented as a FunnelWeb literate programming document
that contains definitions for the various specification files needed
by the Eli compiler construction system. The exact same set of
files that enabled FunnelWeb to produce this document also enables
Eli to produce the complete executable compiler, run time system,
and standard library. In this document the source code is
complemented by interspersed documentation text and several
larger introduction text blocks and appendices, in particular a
description of all errors found in the compiler during its
development and use. The compiler takes CuPit source code as input
and produces MPL source code as output. CuPit is a special purpose
language for neural network algorithms which dynamically change the
topology of the neural network. The compiler is designed to optimize
the irregular problems that arise when executing such algorithms for
both data locality and load balancing. The compiler can produce
several different versions of code: (1) a plain
do-as-good-as-you-can-without-any-tricks one (unoptimized), (2) one
that uses a better data distribution (statically optimized), (3) one
that contains additional instructions to collect information about
program behavior at run time, also known as the "rti version"
meaning "run time information version" (dynamically optimized)
PbT
This document contains the requirements for the system to be built by the
participants of the Plat_Forms 2007 contest. The system is called PbT (People
by Temperament). It is to be written within 30 hours by a team of three
people. For further details about the contest, please see www.plat-forms.org